Image-Based 3D Reconstruction
Saad M. Khan, Pingkun Yan
In this project we developed a purely image-based approach to fusing foreground silhouette information from multiple arbitrary views for applications such as 3D reconstruction. Our approach does not require 3D constructs such as calibrated cameras to carve out 3D voxels or to intersect visual cones in 3D space. Using planar homographies and foreground likelihood information from a set of arbitrary views, we show that visual hull intersection can be performed entirely in the image plane, without going into 3D space. This process delivers a 2D grid of object occupancy likelihoods representing a cross-sectional slice of the object. Subsequent slices of the object are obtained by extending the process to planes parallel to a reference plane, in a direction along the body of the object (see Figure 1). Figure 2 shows the 3D reconstruction results. A detailed narration video describing the work can be downloaded from the link below.
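The per-plane fusion can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of OpenCV for the homography warps, the product-style fusion rule (which mimics silhouette intersection), and the assumption that per-view, per-plane homographies are already available are all assumptions made for this sketch.

import numpy as np
import cv2


def fuse_slice(likelihood_maps, homographies, grid_size):
    """Warp each view's foreground likelihood map onto a scene plane via its
    plane-induced homography and fuse the warps into one occupancy slice."""
    h, w = grid_size
    slice_occupancy = np.ones((h, w), dtype=np.float64)
    for lik, H in zip(likelihood_maps, homographies):
        # Map image-plane likelihoods to the plane's 2D grid (no 3D step).
        warped = cv2.warpPerspective(lik.astype(np.float64), H, (w, h))
        # Multiplying the warped likelihoods approximates intersecting the
        # silhouettes on this plane: grid cells outside any view's
        # foreground are driven toward zero occupancy.
        slice_occupancy *= warped
    return slice_occupancy


def reconstruct_slices(likelihood_maps, per_plane_homographies, grid_size):
    """Stack occupancy slices for planes parallel to the reference plane,
    swept along the body of the object."""
    return np.stack([fuse_slice(likelihood_maps, Hs, grid_size)
                     for Hs in per_plane_homographies])

Stacking the per-plane occupancy grids produced this way gives the sliced reconstructions visible in Figure 2.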
Figure 1: Visual hull intersection at multiple planes in the scene. Each plane delivers a slice of the object.
Figure 2: 3D reconstruction results from some of our experiments. The images on the right are zoomed-in views of the reconstructed objects; notice the slices.
Associated publication: Saad M. Khan, Pingkun Yan, Mubarak Shah, "A Homographic Framework for the Fusion of Multi-view Silhouettes," International Conference on Computer Vision (ICCV), Rio de Janeiro, Brazil, 2007.
Narration Video Download (100 MB)
Results Videos (.zip file 27 MB)